    Minimizing Negative Impacts Caused by Emergency Vehicle Preemption on Arterial Signal Coordination

    Emergency vehicles are essential means of timely and urgent rescue. Emergency vehicle preemption (EVP), the most common form of traffic signal preemption in urban areas, is usually activated to grant emergency vehicles the right of way at signalized intersections by abruptly terminating the current signal timing plan. However, such mandatory signal changes can disrupt arterial signal coordination, since terminating the timing plan at each intersection triggers unexpected transitions in the traffic signal controllers at the end of the preemption. To comprehensively analyze and minimize these negative impacts, this research summarized optical-based EVP operations, along with the preemption modules in Trafficware 980 ATC V76 signal controllers, to illustrate their effects on arterial signal coordination. Hardware-in-the-Loop Simulation (HILS) was used to evaluate and compare five EVP exit strategies in Trafficware 980 ATC V76 signal controllers: no exit phase, exit to fixed phases, exit to fixed phases (end dwell), Coord+Preempt, and the dynamic threshold timer, under five defined simulation scenarios with different numbers of preemptions. The evaluations were carried out on a signalized arterial, N. McCarran Blvd, which includes two intersections in Reno, Nevada. It was found that the extent of the negative impacts on arterial signal coordination was highly influenced by the preemption activation points, the route of the emergency vehicle, the choice of exit strategy, and the number of preemptions. Average vehicle delay on the non-preempted arterial movements tended to rise as the number of preemptions increased. Among the five exit strategies, the no-exit-phase strategy performed worse than the others, especially on the arterial through movements.
    End dwell and the dynamic threshold timer provided more flexibility in phase return than exiting to fixed phases alone, which potentially improves performance on both the main street and the side streets. Coord+Preempt was found to be the best exit strategy to implement because it can return to the background coordination without a transition and produces the lowest overall delay for all turning movements under different numbers of preemptions.
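    The dynamic threshold timer mentioned above can be illustrated with a toy decision rule (this is a hedged sketch, not the Trafficware implementation): at the end of a preemption the controller dwells briefly if it can rejoin the coordinated plan within a threshold window, and otherwise runs a full transition. The parameter names and timing model here are illustrative assumptions.

    ```python
    def exit_decision(local_clock, cycle_length, sync_point, threshold):
        """Return 'dwell' if the coordinated sync point is reachable within
        the threshold window, else 'transition'. All times in seconds."""
        # Time until the coordinated background plan next reaches its sync point.
        wait = (sync_point - local_clock) % cycle_length
        return "dwell" if wait <= threshold else "transition"
    ```

    For example, with a 100 s cycle and a 20 s threshold, a preemption ending at local clock 85 would dwell 15 s and rejoin in sync, while one ending at local clock 40 would trigger a transition.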

    Model Order Estimation in the Presence of Multipath Interference using Residual Convolutional Neural Networks

    Model order estimation (MOE) is often a prerequisite for Direction of Arrival (DoA) estimation. Due to limits imposed by array geometry, it is typically not possible to estimate spatial parameters for an arbitrary number of sources; an estimate of the signal model is usually required. MOE is the process of selecting the most likely signal model from several candidates. While classic methods fail at MOE in the presence of coherent multipath interference, data-driven supervised learning models can solve this problem. Instead of the classic MLP (multilayer perceptron) or CNN (convolutional neural network) architectures, we propose the application of Residual Convolutional Neural Networks (RCNN) with grouped symmetric kernel filters to deliver state-of-the-art estimation accuracy of up to 95.2% in the presence of coherent multipath, and a weighted loss function to eliminate underestimation error of the model order. We show the benefit of the approach by demonstrating its impact on an overall signal processing flow that determines the number of total signals received by the array, the number of independent sources, and the association of each of the paths with those sources. Moreover, we show that the proposed estimator provides accurate performance over a variety of array types, can identify the overloaded scenario, and ultimately provides strong DoA estimation and signal association performance.
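    The weighted loss idea above, penalizing underestimation of the model order, can be sketched as cross-entropy plus an extra term on the probability mass assigned to orders below the true one. This is a minimal illustration; the authors' exact weighting scheme is not given here, and `under_penalty` is an assumed hyperparameter.

    ```python
    import math

    def weighted_moe_loss(probs, true_order, under_penalty=2.0):
        """probs[k] = predicted probability that the model order is k.
        Adds a penalty on mass placed below the true order, so the
        estimator is discouraged from underestimating."""
        ce = -math.log(probs[true_order])      # standard cross-entropy term
        under_mass = sum(probs[:true_order])   # mass on orders < true order
        return ce + under_penalty * under_mass
    ```

    Two predictions with the same confidence on the true order then differ in loss when one leans toward underestimation and the other toward overestimation.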

    Spatial Steerability of GANs via Self-Supervision from Discriminator

    Generative models have made huge progress in photorealistic image synthesis in recent years. To enable humans to steer the image generation process and customize the output, many works explore the interpretable dimensions of the latent space in GANs. Existing methods edit attributes of the output image, such as orientation or color scheme, by varying the latent code along certain directions. However, these methods usually require additional human annotations for each pretrained model, and they mostly focus on editing global attributes. In this work, we propose a self-supervised approach that improves the spatial steerability of GANs without searching for steerable directions in the latent space or requiring extra annotations. Specifically, we design randomly sampled Gaussian heatmaps that are encoded into the intermediate layers of generative models as a spatial inductive bias. While training the GAN model from scratch, these heatmaps are aligned with the emerging attention of the GAN's discriminator in a self-supervised manner. During inference, human users can intuitively interact with the spatial heatmaps to edit the output image, for example by varying the scene layout or moving objects in the scene. Extensive experiments show that the proposed method not only enables spatial editing over human faces, animal faces, outdoor scenes, and complicated indoor scenes, but also improves synthesis quality.
    Comment: This manuscript is a journal extension of our previous conference work (arXiv:2112.00718), submitted to TPAMI.
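    The randomly sampled Gaussian heatmaps described above can be sketched as follows; the grid size and sigma range are illustrative assumptions, not the paper's settings, and a real pipeline would produce tensors for each intermediate generator layer rather than nested lists.

    ```python
    import math, random

    def random_gaussian_heatmap(size=16, sigma_range=(2.0, 4.0)):
        """Sample a 2D Gaussian heatmap with a random center and width,
        to be encoded into intermediate generator layers as spatial bias."""
        cx = random.uniform(0, size - 1)
        cy = random.uniform(0, size - 1)
        sigma = random.uniform(*sigma_range)
        return [[math.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * sigma ** 2))
                 for x in range(size)]
                for y in range(size)]
    ```

    At inference time a user would edit such a heatmap directly, e.g. move its peak to relocate an object in the generated scene.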

    One-for-All: Bridge the Gap Between Heterogeneous Architectures in Knowledge Distillation

    Knowledge distillation (KD) has proven to be a highly effective approach for enhancing model performance through a teacher-student training scheme. However, most existing distillation methods are designed under the assumption that the teacher and student models belong to the same model family, particularly the hint-based approaches. By using centered kernel alignment (CKA) to compare the learned features between heterogeneous teacher and student models, we observe significant feature divergence. This divergence illustrates the ineffectiveness of previous hint-based methods in cross-architecture distillation. To tackle the challenge of distilling heterogeneous models, we propose a simple yet effective one-for-all KD framework called OFA-KD, which significantly improves the distillation performance between heterogeneous architectures. Specifically, we project intermediate features into an aligned latent space, such as the logits space, where architecture-specific information is discarded. Additionally, we introduce an adaptive target enhancement scheme to prevent the student from being disturbed by irrelevant information. Extensive experiments with various architectures, including CNN, Transformer, and MLP, demonstrate the superiority of our OFA-KD framework in enabling distillation between heterogeneous architectures. Specifically, when equipped with our OFA-KD, the student models achieve notable performance improvements, with a maximum gain of 8.0% on the CIFAR-100 dataset and 0.7% on the ImageNet-1K dataset. PyTorch code and checkpoints can be found at https://github.com/Hao840/OFAKD.
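    The core projection idea above can be sketched in a few lines: map a student's intermediate feature into the shared logits space with a linear head (discarding architecture-specific shape), then match the teacher's logits with a temperature-softened KL divergence. The linear projection, temperature, and plain KL form here are illustrative assumptions, not OFA-KD's exact training objective.

    ```python
    import math

    def softmax(logits, temperature=1.0):
        exps = [math.exp(z / temperature) for z in logits]
        total = sum(exps)
        return [e / total for e in exps]

    def project(feature, weight):
        """Linear map from the student's feature space to the logits space."""
        return [sum(w * f for w, f in zip(row, feature)) for row in weight]

    def kd_loss(student_feature, weight, teacher_logits, temperature=4.0):
        """KL(teacher || projected student) on temperature-softened logits."""
        p = softmax(teacher_logits, temperature)
        q = softmax(project(student_feature, weight), temperature)
        return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
    ```

    Because the loss is computed in the logits space rather than on raw feature maps, the teacher and student need not share spatial or channel layout, which is what makes cross-architecture hints feasible.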

    IFKD: Implicit field knowledge distillation for single view reconstruction

    In 3D reconstruction tasks, estimation of the camera parameter matrix is usually used to represent the single view of an object, but this is not necessary when mapping a 3D point to a 2D image. A single-view reconstruction task should care more about the quality of the reconstruction than about alignment. In this paper, we therefore propose an implicit field knowledge distillation model (IFKD) to reconstruct 3D objects from a single view. Transformations are performed on the 3D points instead of the camera, keeping the camera coordinate system identified with the world coordinate system, so that the extrinsic matrix can be omitted. In addition, a knowledge distillation structure from the 3D voxels to the feature vector is established to further refine the feature description of 3D objects, so the details of a 3D model can be better captured by the proposed model. This paper adopts the ShapeNet Core dataset to verify the effectiveness of the IFKD model. Experiments show that IFKD has strong advantages in IoU and other core indicators compared with camera-matrix-estimation methods, which verifies the feasibility of the newly proposed mapping method.
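    The point-transformation idea above can be illustrated with a toy implicit field: rather than estimating a camera extrinsic matrix, each query point is rotated into the object's frame and the implicit occupancy function is evaluated there, so the camera frame stays identical to the world frame. The unit-sphere field and the rotation axis below are toy assumptions, not the paper's learned model.

    ```python
    import math

    def occupancy(p, radius=1.0):
        """Toy implicit field: 1 inside a sphere of the given radius, else 0."""
        x, y, z = p
        return 1 if x * x + y * y + z * z <= radius * radius else 0

    def rotate_y(p, theta):
        """Rotate a point about the y-axis (stands in for the view transform)."""
        x, y, z = p
        c, s = math.cos(theta), math.sin(theta)
        return (c * x + s * z, y, -s * x + c * z)

    def query(p, theta):
        """Transform the point (not the camera), then query the field."""
        return occupancy(rotate_y(p, theta))
    ```

    Because only the points move, no extrinsic matrix has to be estimated or inverted; the view is absorbed into the query-point transform.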